
    Blockchain technology research and application: a systematic literature review and future trends

    Blockchain, as the basis for cryptocurrencies, has received extensive attention recently. Blockchain serves as an immutable distributed ledger technology that allows transactions to be carried out credibly in a decentralized environment. Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems, and the Internet of Things (IoT). However, many challenges of blockchain technology, such as scalability and security, remain to be overcome. This article provides a comprehensive overview of blockchain technology and its applications. We begin with a summary of the development of blockchain, then give an overview of the blockchain architecture and a systematic review of the research and application of blockchain technology in different fields from the perspectives of academic research and industry technology. Technical challenges and recent developments are also briefly listed, and we conclude with possible future trends of blockchain technology.
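
    As a toy illustration of the immutability property described above (a sketch, not code from the article): each block commits to the hash of its predecessor, so tampering with any past transaction breaks every subsequent link.

        import hashlib
        import json

        def block_hash(block):
            # Deterministic serialization so the hash is reproducible.
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def append_block(chain, transactions):
            # Each new block commits to the hash of the previous one.
            prev = block_hash(chain[-1]) if chain else "0" * 64
            chain.append({"index": len(chain), "prev": prev, "txs": transactions})

        chain = []
        append_block(chain, ["alice->bob:5"])
        append_block(chain, ["bob->carol:2"])

        # Tampering with an earlier block invalidates every later link.
        chain[0]["txs"] = ["alice->bob:500"]
        assert chain[1]["prev"] != block_hash(chain[0])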

    Theoretical insight into vibrational spectra of metal-water interfaces from density functional theory based molecular dynamics

    Acknowledgements: J. C. is grateful for funding support from the National Natural Science Foundation of China (Grant Nos. 21373166 and 21621091) and the Thousand Youth Talents Program of China. Peer reviewed. Postprint.

    Neurologic Abnormalities in Workers of a 1-Bromopropane Factory

    We reported recently that 1-bromopropane (1-BP; n-propyl bromide, CAS Registry no. 106-94-5), an alternative to ozone-depleting solvents, is neurotoxic and exhibits reproductive toxicity in rats. Four recent case reports have suggested possible neurotoxicity of 1-BP in workers. The aim of the present study was to establish the neurologic effects of 1-BP in workers and examine the relationship with exposure levels. We surveyed 27 female workers in a 1-BP production factory and compared 23 of them with 23 age-matched workers in a beer factory as controls. The workers were interviewed and examined with neurologic, electrophysiologic, hematologic, biochemical, neurobehavioral, and postural sway tests. 1-BP exposure levels were estimated with passive samplers. Tests with a tuning fork showed diminished vibration sensation of the foot in 15 workers exposed to 1-BP but in none of the controls. 1-BP factory workers showed significantly longer distal latency in the tibial nerve than did the controls, but no significant changes in motor nerve conduction velocity. Workers also displayed lower values for sensory nerve conduction velocity in the sural nerve, backward digit recall, Benton visual memory test scores, pursuit aiming test scores, and five items of the Profile of Mood States (POMS) test (tension, depression, anxiety, fatigue, and confusion) compared with controls matched for age and education. Workers hired after May 1999, who were exposed to 1-BP only (workers hired earlier could also have been exposed to 2-BP), showed similar changes in vibration sense, distal latency, Benton test scores, and depression and fatigue in the POMS test. Time-weighted average exposure levels in the workers were 0.34-49.19 ppm. Exposure to 1-BP could adversely affect peripheral nerves and/or the central nervous system.
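
    As background on how such exposure estimates are summarized (a generic illustration, not the study's data): a time-weighted average (TWA) weights each measured concentration by the duration of its sampling interval.

        def time_weighted_average(samples):
            # samples: (concentration_ppm, duration_hours) pairs for one shift.
            total_exposure = sum(c * t for c, t in samples)
            total_time = sum(t for _, t in samples)
            return total_exposure / total_time

        # Hypothetical shift: 4 h at 0.5 ppm, 3 h at 20 ppm, 1 h at 49 ppm.
        print(time_weighted_average([(0.5, 4), (20, 3), (49, 1)]))  # ~13.9 ppm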

    Dissecting whole-brain conduction delays through MRI microstructural measures

    Network models based on structural connectivity have been increasingly used as the blueprint for large-scale simulations of the human brain. As the nodes of this network are distributed across the cortex and interconnected by white matter pathways with different characteristics, modeling the associated conduction delays becomes important. The goal of this study is to estimate and characterize these delays directly from brain structure. To achieve this, we leveraged microstructural measures from a combination of advanced magnetic resonance imaging acquisitions and computed the main determinants of conduction velocity, namely axonal diameter and myelin content. Using the model proposed by Rushton, we combined these measures to calculate conduction velocities and estimated the associated delays using tractography. We observed that both the axonal diameter and conduction velocity distributions remained fairly constant across different connection lengths, with resulting delays that scale linearly with connection length. Relying on insights from graph theory and Kuramoto simulations, our results support the approximation of a constant conduction velocity but also reveal path- and region-specific differences.
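
    A minimal sketch of the delay computation described above, assuming the common form of Rushton's relation, v proportional to d*sqrt(-ln g) with inner axonal diameter d and g-ratio g; the proportionality constant k and the input values below are placeholders, not the study's calibration.

        import numpy as np

        def conduction_velocity(axon_diameter_um, g_ratio, k=5.5):
            # Rushton-style relation: v = k * d * sqrt(-ln g), where d is the
            # inner axonal diameter and g the g-ratio (inner/outer diameter).
            # k is a placeholder proportionality constant, not a calibrated value.
            return k * axon_diameter_um * np.sqrt(-np.log(g_ratio))

        def conduction_delay_ms(length_mm, velocity_m_per_s):
            # mm divided by m/s conveniently yields milliseconds.
            return length_mm / velocity_m_per_s

        v = conduction_velocity(axon_diameter_um=1.0, g_ratio=0.7)        # ~3.3 m/s
        print(conduction_delay_ms(length_mm=80.0, velocity_m_per_s=v))    # ~24 ms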

    Machine Learning Approaches to Predict Risks of Diabetic Complications and Poor Glycemic Control in Nonadherent Type 2 Diabetes

    Purpose: The objective of this study was to evaluate the efficacy of machine learning algorithms in predicting risks of complications and poor glycemic control in nonadherent type 2 diabetes (T2D).
    Materials and Methods: This study was a real-world study of the complications and blood glucose prognosis of nonadherent T2D patients. Data on inpatients in Sichuan Provincial People's Hospital from January 2010 to December 2015 were collected. T2D patients who had neither been monitored for glycosylated hemoglobin A nor changed their hyperglycemia treatment regimens within the last 12 months were the subjects of this study. Seven types of machine learning algorithms were used to develop 18 prediction models. Predictive performance was mainly assessed using the area under the curve (AUC) on the testing set.
    Results: Of 800 T2D patients, 165 (20.6%) met the inclusion criteria, of whom 129 (78.2%) had poor glycemic control (defined as glycosylated hemoglobin A ≥7%). The highest testing-set AUCs for diabetic nephropathy, diabetic peripheral neuropathy, diabetic angiopathy, diabetic eye disease, and glycosylated hemoglobin A were 0.902 ± 0.040, 0.859 ± 0.050, 0.889 ± 0.059, 0.832 ± 0.086, and 0.825 ± 0.092, respectively.
    Conclusion: Both univariate analysis and machine learning methods reached the same conclusion. The duration of T2D and the duration of unadjusted hypoglycemic treatment were the key risk factors for diabetic complications, and the number of hypoglycemic drugs was the key risk factor for glycemic control in nonadherent T2D. This was the first study to use machine learning algorithms to explore the potential adverse outcomes of nonadherent T2D. The performance of the final prediction models was acceptable, outperforming most previous studies on most evaluation measures. These models have potential clinical applicability in improving T2D care.
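
    A minimal sketch of this kind of pipeline, fitting two of the many possible classifier families and scoring by test-set AUC; the synthetic features below merely stand in for the study's clinical variables.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for clinical predictors (e.g. disease duration).
        X = rng.normal(size=(165, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=165) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(type(model).__name__, round(auc, 3))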

    Electrocatalytic reduction of CO2 to ethylene and ethanol through hydrogen-assisted C-C coupling over fluorine-modified copper

    Precisely controlling the C-C coupling of C1 molecules to synthesize specific C2+ compounds is one of the most challenging problems in C1 chemistry. Because C2+ compounds such as ethylene and ethanol have important uses in the chemical and energy industries, the direct conversion of CO2 into C2+ products is highly attractive. Developing efficient catalysts that combine high current density, high C2+ selectivity, and high stability is key to advancing electrocatalytic CO2 reduction toward practical application. Noting that catalysts with high CO2-reduction Faradaic efficiency often suffer from low activity, the research team proposed that moderately enhancing a catalyst's ability to activate water is important for increasing CO2 reduction activity, developed a new hydrogen-assisted C-C coupling strategy, and achieved a breakthrough in the electrocatalytic reduction of CO2 to ethylene and ethanol over a fluorine-modified copper (F-Cu) catalyst. The experimental work was supervised mainly by Professors Ye Wang and Qinghong Zhang and carried out by Wenchao Ma (Ph.D. student, 2016 cohort, Collaborative Innovation Center of Chemistry for Energy Materials, iChEM) and Shunji Xie (senior engineer, State Key Laboratory of Physical Chemistry of Solid Surfaces; co-first author); the theoretical calculations were supervised by Professor Jun Cheng and performed by Tongtong Liu (M.S. student, 2017 cohort; co-first author) and Qiyuan Fan (Ph.D. student, 2016 cohort). Dr. Jinyu Ye supported the in situ infrared measurements. Professor Zheng Jiang, Dr. Fanfei Sun, and Ruoou Yang of the Shanghai Synchrotron Radiation Facility supported the synchrotron radiation characterization. This is the final submitted version; for the officially published version of the paper, please visit the official link (https://doi.org/10.1038/s41929-020-0450-0). Electrocatalytic reduction of CO2 into multi-carbon (C2+) products is a highly attractive route for CO2 utilization. However, the yield of C2+ products remains low because of the limited C2+ selectivity at high CO2 conversion rates. Here, we report a fluorine-modified copper catalyst that exhibits an ultrahigh current density of 1.6 A cm−2 at a C2+ (mainly ethylene and ethanol) Faradaic efficiency of 80% for electrocatalytic CO2 reduction in a flow cell. The C2-4 selectivity reaches 85.8% at a single-pass yield of 16.5%. We show a hydrogen-assisted C-C coupling mechanism between adsorbed formyl (CHO) intermediates for C2+ formation. Fluorine enhances water activation, CO adsorption, and hydrogenation of adsorbed CO to the CHO intermediate, which can readily undergo coupling. Our findings offer an opportunity to design highly active and selective CO2 electroreduction catalysts with potential for practical application. This work was supported by the National Key Research and Development Program of the Ministry of Science and Technology of China (No. 2017YFB0602201) and the National Natural Science Foundation of China (Nos. 21690082, 91545203, 21503176 and 21802110). We thank the staff at the BL14W1 beamline of the Shanghai Synchrotron Radiation Facility (SSRF) for assistance with the EXAFS measurements.
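
    To make the headline figures concrete (a back-of-envelope check, not the authors' analysis code): the C2+ partial current density is the total current density scaled by the C2+ Faradaic efficiency, and Faradaic efficiency itself follows from the charge balance FE = z*n*F/Q.

        F = 96485.0  # Faraday constant, C/mol

        # Partial current density toward C2+ at the reported operating point.
        j_total = 1.6      # A/cm^2, total current density
        fe_c2plus = 0.80   # C2+ Faradaic efficiency
        print(j_total * fe_c2plus)  # 1.28 A/cm^2 carried by C2+ formation

        def faradaic_efficiency(n_mol_product, z_electrons, charge_coulombs):
            # FE = z * n * F / Q: fraction of passed charge that formed the product.
            return z_electrons * n_mol_product * F / charge_coulombs

        # Hypothetical run: 1e-5 mol ethylene (12 e- per molecule, from 2 CO2),
        # with 15 C of charge passed.
        print(faradaic_efficiency(1e-5, 12, 15.0))  # ~0.77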

    Estimating axial diffusivity in the NODDI model

    To estimate microstructure-related parameters from diffusion MRI data, biophysical models make strong, simplifying assumptions about the underlying tissue. The extent to which many of these assumptions are valid remains an open research question. This study was inspired by the disparity between the intra-axonal axial diffusivity estimated in the literature and that typically assumed by the Neurite Orientation Dispersion and Density Imaging (NODDI) model (d∥ = 1.7 μm²/ms). We first demonstrate how changing the assumed axial diffusivity results in considerably different NODDI parameter estimates. Second, we illustrate the ability to estimate axial diffusivity as a free parameter of the model using high b-value data and an adapted NODDI framework. Using both simulated and in vivo data, we investigate the impact of fitting to either real-valued or magnitude data, with Gaussian and Rician noise characteristics respectively, and what happens if the noise assumptions are wrong in this high b-value and thus low-SNR regime. Our results from real-valued human data estimate intra-axonal axial diffusivities of ~2-2.5 μm²/ms, in line with the current literature. Crucially, our results demonstrate the importance of accounting for a rectified noise floor and/or a signal offset to avoid biased parameter estimates when dealing with low-SNR data.
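
    The rectified noise floor at issue is easy to reproduce (a generic simulation, not the paper's fitting code): the magnitude of a complex signal with Gaussian noise on each channel is Rician distributed, and at low SNR its mean sits well above the true signal, whereas real-valued data remain unbiased.

        import numpy as np

        rng = np.random.default_rng(1)
        sigma = 1.0          # per-channel Gaussian noise standard deviation
        true_signal = 0.5    # low SNR: true signal below the noise level

        noise_re = rng.normal(scale=sigma, size=100_000)
        noise_im = rng.normal(scale=sigma, size=100_000)

        real_valued = true_signal + noise_re                        # Gaussian noise
        magnitude = np.abs(true_signal + noise_re + 1j * noise_im)  # Rician noise

        print(real_valued.mean())  # ~0.5: unbiased estimate of the signal
        print(magnitude.mean())    # ~1.3: rectified noise floor biases upward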

    Validation of Deep Learning techniques for quality augmentation in diffusion MRI for clinical studies

    The objective of this study is to evaluate the efficacy of deep learning (DL) techniques in improving the quality of diffusion MRI (dMRI) data in clinical applications. The study aims to determine whether the use of artificial intelligence (AI) methods on medical images may result in the loss of critical clinical information and/or the appearance of false information. To assess this, the focus was on the angular resolution of dMRI, and a clinical study was conducted on migraine, specifically comparing episodic and chronic migraine patients. The number of gradient directions had an impact on white matter analysis results, with statistically significant differences between groups being drastically reduced when using 21 gradient directions instead of the original 61. Fourteen teams from different institutions were tasked with using DL to enhance three diffusion metrics (FA, AD, and MD) calculated from data acquired with 21 gradient directions and a b-value of 1000 s/mm², with the goal of producing results comparable to those calculated from 61 gradient directions. The results were evaluated using both standard image quality metrics and Tract-Based Spatial Statistics (TBSS) to compare episodic and chronic migraine patients. The results suggest that while most DL techniques improved the ability to detect statistical differences between groups, they also led to an increase in false positives: false positives grew at a constant rate, linearly proportional to the newly detected true positives, which highlights the risk of generalizing AI-based methods to diverse clinical cohorts when training uses data from a single group. The methods also showed divergent performance when replicating the original distribution of the data, and some exhibited significant bias. In conclusion, extreme caution should be exercised when using AI methods for harmonization or synthesis on heterogeneous data in clinical studies, as important information may be altered even when global metrics such as structural similarity or peak signal-to-noise ratio suggest otherwise.
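
    For reference, the global metrics mentioned in the conclusion can be computed as below (a generic usage sketch on synthetic images, assuming scikit-image); the study's point is that high scores on these metrics do not guarantee that group-level statistics survive.

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        rng = np.random.default_rng(2)
        reference = rng.random((64, 64))  # stand-in for a 61-direction metric map
        enhanced = reference + rng.normal(scale=0.02, size=reference.shape)

        print(peak_signal_noise_ratio(reference, enhanced, data_range=1.0))
        print(structural_similarity(reference, enhanced, data_range=1.0))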